ratmice

Everything posted by ratmice

  1. So after having a bunch of problems last week, my array is now in good shape. I happen to have a precleared 4TB drive that I want to deploy. The array has a bunch of 2TB drives that are getting a bit long in the tooth. Using SMART reports, is there a good way to choose which drive to replace when all drives PASS, or do you just go by how old they are? I have attached SMART reports for the candidate drives if anyone with more experience parsing SMART reports would be kind enough to take a peek. Archive.zip
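For comparing candidates like this, the usual tell-tales are the raw values of Reallocated_Sector_Ct, Current_Pending_Sector, and Power_On_Hours in each report. A minimal sketch for pulling those out of saved `smartctl -a` text files; the function name and the idea of ranking purely on these three attributes are my own, not an official procedure:

```shell
# Summarize the failure-predicting attributes from one saved smartctl report.
# The attribute names are smartctl's standard table; $NF is the raw value
# in the last column of each attribute line.
smart_summary() {   # smart_summary /path/to/report.txt
  awk '
    $2 == "Reallocated_Sector_Ct"  { realloc = $NF }
    $2 == "Current_Pending_Sector" { pending = $NF }
    $2 == "Power_On_Hours"         { hours   = $NF }
    END { printf "realloc=%d pending=%d hours=%d\n", realloc+0, pending+0, hours+0 }
  ' "$1"
}
```

Run it once per report; a drive with nonzero reallocated or pending sectors is usually the first candidate for replacement, with power-on hours as the tie-breaker.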
  2. Thanks to all who replied on this thread. I am now up and running, albeit having lost some data; all in all, not too bad.
  3. So, I stopped the array, checked maintenance mode, and pressed Start, but the array was unable to start in maintenance mode due to too many wrong/missing disks. As I see it, the options are to set up a new config, either removing just the parity disk or removing both the parity and the dead disk. If I take the dead disk out of the new config, will I still be able to access it outside the array to run fsck and/or clone it? I'm in over my head. LOL.
  4. How would one go about cloning an unmountable disk? I'm feeling pretty ignorant right about now.
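On a failing disk, GNU ddrescue is the usual answer, since it retries bad areas and keeps a map of what it has recovered. As a rough fallback sketch using plain dd (which nearly every system has); the device names are placeholders and must be double-checked before running anything:

```shell
# Block-level clone that keeps going past read errors.
# conv=noerror continues after a failed read; conv=sync pads the short
# read with zeros so the copy stays aligned (the image may therefore be
# zero-padded at the end).
clone_disk() {   # clone_disk /dev/sdX /dev/sdY   or   clone_disk /dev/sdX /path/to/disk.img
  dd if="$1" of="$2" bs=1M conv=noerror,sync status=progress
}
```

Cloning to an image file first, then running repair tools against the image, leaves the original untouched if things go further sideways.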
  5. That's what I was thinking, but I am not fluent in log reading. I assume the option to check only the removed disk is on the disk page in UnRAID when in maintenance mode, as I don't see mention of that option on the fsck manpage. Note: I am working up the patience to stop the array and want all my ducks in a row before I do so; last time I stopped it I could not start it again due to a too many wrong/missing disks error and had to reboot. That's correct, at this point I just need to get the remaining disks up and running so I can resync parity. I was hoping to mount the failed disk outside the array and try to pull whatever info I could off it, but it seems like that may not happen.
  6. So a few days ago I was having an issue during a parity sync with a drive that was failing (see my post about it here). While trying to get the data off that drive and replace it, the parity disk dropped out again. So I am now left with a disabled parity drive and an unmountable data disk. My limited knowledge tells me to just bite the bullet and set up a new config, leaving out the dead drive and parity disk, restart the array, resync parity, and be on my way. When I stop the array to do so, I get a message about too many wrong or missing disks, which seems correct (a dead drive and a wonky parity that's disabled). In preparation for this I shut down the server, reseated the SATA and power cables, rebooted, and started a memtest going overnight (no errors found). This morning when the server rebooted I noticed an odd message during the boot sequence, which I had been seeing occasionally over the past little while: Tower login: XFS (md2): metadata I/O error: block 0xaed3fbe7 ("xlog_bread_noalign") error 5 numblks 1 So my questions are: 1. Does this message just indicate that UnRAID is unhappy with the unmountable, dead drive? 2. Would it be worth it to jump into maintenance mode and play around with the unmountable drive, in the hopes of making it mountable and pulling some more data off, before going the new config route? 3. Being new to 'new config', is it really as simple as reassigning all the remaining data disks, leaving off the dead disk and parity disk, and restarting the array? 4. Anything else I am missing here? Thank you for your patience if you're reading this far; my knowledge is limited and I really appreciate any help.
  7. Well, I was trying to get all the data off the failing drive; unfortunately it went belly up and is currently unmountable. Oh, and the parity drive was disabled again during the copy, so there's that. I do have drives to replace the failed disk, and what I now assume is a failing parity drive, too. The new parity needs to be precleared, which takes forever to do three cycles. So I'm hoping to replace the dead data drive (as a new empty drive), resync parity, and get the new parity drive preclearing to replace the potentially failing one, while crossing my fingers it doesn't go belly up, too. Fun times.
  8. Last week I awoke to a yellow triangle next to my parity drive, which appears to mean parity is being emulated due to a disk problem. There were no other obvious errors with the array, and the SMART report of the parity disk appeared clean. After some reading on the forum I figured that rebuilding parity was the way to go. I shut down the array, removed the parity disk (well, unassigned it), restarted the array, stopped it again, chose the parity disk again, and restarted the array. Parity started rebuilding. I went away for the weekend. On return I see the array is up and running, "parity is valid", all disks are green, and there is a data disk (#2) with thousands of read errors. I'm not really sure what to do now. I have attached diagnostics to help. I do still have a diagnostic report from right before the parity rebuild if that would help. I checked a bunch of random checksums from that disk and have not found any errors, yet. Any help would be greatly appreciated. tower-diagnostics-20170905-1802.zip
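Rather than spot-checking random checksums, a whole-disk manifest makes this kind of verification repeatable: build the list once, re-check it after any scary event. A sketch with standard GNU tools; the mount points in the comments are examples and the function names are made up:

```shell
# Build a checksum manifest for everything under a mount point.
make_manifest() {   # make_manifest /mnt/disk2 /boot/disk2.md5
  (cd "$1" && find . -type f -print0 | sort -z | xargs -0 md5sum) > "$2"
}

# Re-verify the same tree against a saved manifest; exits nonzero and
# names the files if anything no longer matches.
check_manifest() {  # check_manifest /mnt/disk2 /boot/disk2.md5
  (cd "$1" && md5sum -c --quiet) < "$2"
}
```

The manifest is also handy evidence when deciding whether read errors during a rebuild actually corrupted anything.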
  9. Excellent, spare sets on the way. Thanks for the heads-up.
  10. That's sorta what I had assumed, but always good to get corroboration. Thanks.
  11. I kinda think that drive is not long for this world, too. Unfortunately it is an older drive :-( I thought it interesting that it kinda just disappears without any trace. Here's hoping that someone can shed some light on the info in the syslog. p.s. 'Fix Common Problems' complains that it doesn't know about 'dynamix.smart.drivedb.update' so it MAY be incompatible. p.p.s Thanks for not taking umbrage to me being unsure of the motives behind your 1st reply. So many things can be misconstrued without social interaction context.
  12. So, I can't tell if your comment is an honest question or a snarky retort, so I'll assume the former: well, that's part of what I'm trying to figure out. I have precleared multiple drives over the past few months with no problem. A different drive is preclearing happily right now. There do seem to be some warnings/errors in the syslog, and to my limited knowledge they do not seem to be related to SMART status. I'm just trying to get some definitive info about that DB plugin. As I understand it, it just sets up a cron job to update the SMART DB; it seems far-fetched that that would affect preclearing in the manner I'm seeing, but that's why I'm asking people way smarter than I. I considered it (the DB plugin) being an issue after I had started a new preclear. Once it gets through a full cycle I will remove the other plugin and try the wonky drive again. In the meantime I thought I'd ask about it.
  13. So, having a strange problem. I have been trying to preclear a disk I had lying around. It starts normally, but a couple of hours in it just disappears from the interface. No sign of it in the unassigned disks section of the dashboard, nor on the preclear plugin page. Just gone. If I reboot the server, it's back. Not really sure what to check. I had just erased and zeroed the whole disk on my desktop machine prior to trying the preclear. The syslog (attached) seems to have an error that indicates the drive size goes to zero?!? Not sure what that means. Near the end of the log you'll see various entries for [sdn]; that's the disk. tower-diagnostics-20170522-2349.zip
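When a drive vanishes like this, the syslog lines immediately before it disappears usually name the cause (link reset, capacity change to 0, USB/SATA disconnect). A small sketch for pulling every line about one device out of a saved log; the device name and log path are examples:

```shell
# Show every syslog line that mentions the given device, plus common
# detach/offline messages, so the sequence leading up to the dropout
# can be read in one place.
drive_events() {   # drive_events sdn /path/to/syslog
  grep -E "\b$1\b|detached|offline" "$2"
}
```

Reading the matched lines in order is often enough to tell a dying drive from a flaky cable or controller port.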
  14. Is the SMART driveDB Update plugin still useful? I can't seem to find any info about it, and Fix Common Problems yells about it.
  15. Thanks. <DOH> Really appreciate your help. However, in relation to the 1st issue above, the new preclear report, which is saved as a text file, contains a full page of this: tput: unknown terminal "screen" tput: unknown terminal "screen" tput: unknown terminal "screen" ... ... tput: unknown terminal "screen" tput: unknown terminal "screen" Obviously related to the status display issue when running the plugin. I know this was happening earlier, and was addressed in an update, but it seems to be rearing its head again. I have attached diagnostics, should that be helpful. Nope, OSX is trying to open the report as an executable; open the file using TextEdit instead. tower-diagnostics-20160921-1330.zip
  16. On a different note, below is what all my preclear reports look like. Needless to say, not terribly helpful. Not sure what to make of this. Especially troubling due to the problem reported above with the progress display. I was hoping to use the reports to see what happened, but there is no usable info in them. Any help would be appreciated. Last login: Tue Sep 20 11:32:51 on ttys000 Mathews-MacBook-Pro-5:~ mat$ /Volumes/flash/preclear_reports/preclear_finish_VKHR63TX_2016-07-06 ; exit; /Volumes/flash/preclear_reports/preclear_finish_VKHR63TX_2016-07-06: line 1: Disk:: command not found /Volumes/flash/preclear_reports/preclear_finish_VKHR63TX_2016-07-06: line 2: syntax error near unexpected token `(' /Volumes/flash/preclear_reports/preclear_finish_VKHR63TX_2016-07-06: line 2: `smartctl 6.2 2013-07-26 r3841 [x86_64-linux-4.1.13-unRAID] (local build)' logout Saving session... ...copying shared history... ...saving history...truncating history files... ...completed. [Process completed]
  17. I'm seeing the same output; do we know what's going on here? I'm tempted to just let it finish, as the plugin status seems to be chugging along. Note: I do have NerdTools installed and screen is active. UnRAID 6.1.9, preclear plugin 2016.09.11a, using the default script (not Joe's original).
  18. I just want to thank all those (especially RobJ) for the tips and procedures in this thread. As a novice user I had some trepidation about conversion of all my Reiser disks to XFS. I really wanted to get this done as there is no way to read a Reiser disk on OS X, and there is a FUSE plugin for XFS reads available. Plus, it's not a dead FS. I converted 18 disks (totaling about 60 TB) without a hiccup. I couldn't be happier. Thanks again to all those involved.
  19. Having the same issue; was this ever addressed? I have v1.2. Reinstalling allows me to enter new credentials, but it won't take them and open the app.
  20. Today I have noticed a strange issue with the main page of UnRAID GUI (and only the main page). The header (with the banner, tabs, UnRAID logo etc...) does not display and the footer (that shows: Array Started• Dynamix webGui v2016.06.18unRAID™ webGui © 2016, Lime Technology, Inc.) will appear in the middle of the page, and split apart on separate lines, rather than pegged to the bottom of the window. All other pages display normally. I have just added another drive and wonder if the sheer length of the page has something to do with it? This does not affect anything, other than being a little annoying when trying to use this machine to access my server, but I thought it might point out some wonky HTML in the Dynamix code. Is anybody else noticing this? note: if I run the cursor over the area where the tabs should be they do appear, but the other wonkiness remains.
  21. I'm on 6.1.9. Oh, and invoking the script works fine; it's just the plugin that doesn't do anything.
  22. After upgrading to UnRAID 6.1.9 today, the preclear plugin no longer works. It shows drives to be precleared, and indicates that the script is present, but pressing the 'Start Preclear' button does nothing. It seems like this problem was reported a little while back, but with no resolution. Any ideas? I did try uninstalling and reinstalling the plugin, with no effect.
  23. Thanks for all the work you put into these plugins; it is very much appreciated. Here's hoping they didn't make it too difficult to get PlexPass working again.
  24. Well, I did two things: one, I loaded the 'fail-safe' BIOS settings for my MoBo and restarted; I then had to re-allow booting from the flash drive and restarted again. I also changed the IP address to some random, fairly high number. I've been up for 24 hours without losing connectivity. I wonder if there was some kind of collision with the old IP, even though I didn't see anything else on the network using it. I re-added PMS and it worked the first time.