Shamutanti

Members · Posts: 19

  1. Thanks Jorge, I'm rebuilding parity now and it seems to be going smoothly. I checked the cabling and found that the latch on that particular SATA connector wasn't clicking, so I've swapped that cable to the drive I'm removing and ordered a replacement. I'll report back if any further problems crop up.
  2. Just an update on this: the disabled drive appears to be healthy and to have all the data intact (I've been able to mount it and browse the files with the array stopped). If so, the best plan is probably to do a new configuration and rebuild parity? Specifically: stop the array; run New Config, preserving the current assignments; unassign the empty/unmountable disk (which I want to remove anyway); leave the disabled drive alone (I think its data is fine); then start the array, which will rebuild parity. Does that sound reasonable? I'm happy to try rebuilding the data drive first (onto itself), but I don't know whether that will work with the other drive unmountable.
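     For reference, a disk can be browsed read-only with the array stopped using something along these lines (/dev/sdX1 is a placeholder for the actual device and partition):
       mkdir -p /mnt/check
       mount -o ro /dev/sdX1 /mnt/check   # read-only, so nothing on the disk is altered
       ls -l /mnt/check                   # browse the contents
       umount /mnt/check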
  3. Hi, I was running the Shrink Array process (from here) to zero an empty drive (disk 2) and then remove it. However, just as the zeroing started, I noticed that another drive (disk 15) was showing as disabled ('a mandatory SMART command failed'). I assumed it was a glitch, something to do with unmounting/zeroing the initial drive, so I stopped the zeroing process and rebooted the server. Unfortunately I rebooted without grabbing any logs, so I don't have anything from when the error happened; I've attached the current diagnostics anyway.
     The disabled drive is now reporting SMART normally and showing no errors; however, it is still in a disabled/emulated state. The drive I was zeroing is showing as unmountable (because, I assume, it had been zeroed enough to wipe out the file system). I am reasonably confident all the drives are healthy: I did a parity check a few days ago with no problems, and as of now none of the drives have any SMART errors. I also only upgraded to 6.9.2 from 6.8.3 earlier today, but I assume that's not relevant.
     I'm not sure how to proceed to restore things to health. Should I rebuild the disabled drive? I assume it will be out of sync with parity now, since I was writing zeros to the other drive when it became disabled. And can I do this with that (empty) drive unmountable? Or should I start the zeroing process again, let it finish (30 hrs or so), remove that drive, and then rebuild the emulated drive? Or, of course, something else altogether. Any advice gratefully received... tower-diagnostics-20220304-1951.zip
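     For reference, a quick way to double-check a drive's SMART state from the console is something like this (/dev/sdX is a placeholder for the drive in question):
       smartctl -H /dev/sdX   # overall health assessment
       smartctl -A /dev/sdX   # attribute table, e.g. reallocated and pending sector counts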
  4. I've recently installed a new drive, precleared it, and copied across the contents of an existing drive (for later removal, as I have no free bays). Then I started putting new files onto the new drive. Everything worked fine for about 500GB, but then the server locked up (the transfer stopped, a stream stopped playing, and the web GUI became unresponsive). I could still get in via the console, and I was able to reboot the server.
     When it came back up, starting the array took far longer than usual - a couple of minutes to mount the disks, then it paused on 'starting services' for several minutes (although the drives/shares were accessible at this point), before finally presenting the normal 'Main' page in the GUI. Streaming was again working fine, but when I tried to copy a single file to the new drive, it worked for about 700MB, then failed. The GUI was still working, so I tried to stop the array, but it hung on 'syncing filesystems'. I could still access the server via the console. I pulled off diagnostics, attached below. This is actually the second set, as I'd also grabbed diagnostics when I noticed how long the array was taking to start. After some time I tried the powerdown command from the console, which appeared to work, but the server didn't actually turn off, and eventually I had to hard-reset it.
     On restarting, I had the same very slow start for the array, but it's up now and seems to be working fine for streaming; I haven't tried writing anything substantial yet (I did write a very small file and that worked fine). I couldn't see anything obvious in the syslog, except that the ReiserFS disks (I have a mix of those + XFS) were all 'replaying transactions', which would appear to be the cause of the slow disk mounting, but I couldn't see why that might be happening, or much else of note (at one point there's a BTRFS warning about wrong free space on device loop0, and rebuilding the free space cache, but I don't know what that means). Can any wiser minds shed any light?
     Edit - I've rebooted the server since and start times were normal, so it looks like that was a consequence (perhaps) of not shutting down properly. I will try writing data tomorrow...
     Edit - all seems OK now, so hopefully it was just some random glitch (and a consequent improper shutdown). Thanks, tower-diagnostics-20180527-1450.zip
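     For anyone looking at the same thing, the messages in question can be pulled out of the syslog with something along these lines (the ReiserFS journal-replay lines and the loop0/BTRFS warning respectively):
       grep -i 'replaying' /var/log/syslog
       grep -i 'loop0' /var/log/syslog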
  5. Hi - not sure if anyone can help; I've had a read around but haven't quite managed to answer my questions. I'm aiming to replace an existing 3TB ReiserFS data disk with a new 8TB XFS drive, then physically remove the existing 3TB drive (to leave space for a future large drive when needed). Rather than convert everything now, I'll just add larger replacement XFS drives as I need more space. I'm running version 6.2.1, with one parity drive.
     I've followed the File System Conversion procedure and am currently on step 8 (the data is copying with rsync to the new drive, about 80% done). I'd misunderstood what excluding the disk did (I thought it excluded it from parity; I understand better now), so I can't simply remove the old disk and have parity stay valid. I believe I can still follow the procedure down to step 16 (restart the array with 'parity already valid' checked). But what should I do next? I could, I think, unassign the old drive and then rebuild parity, but ideally I'd like to stay protected throughout.
     Should I follow the Shrink Array procedure? That is, remove the share exclusion, format the disk (as XFS) to clear it quickly, run the 'Clear Me' user script, and follow the assorted other steps to remove the drive? If that is the case, my only remaining confusion is whether to remove the exclusion and then format, or the other way round.
     I'd also like to know (a) whether there's a better/faster way of removing my now-unneeded duplicate data disk, and (b) since I'll be repeating the procedure at some future point, whether there's a simpler overall method if my aim from the start is just to remove the original disk once the data is copied. Ideally, I suppose, I'd make a duplicate of the original disk onto the new XFS file system without changing parity (so I could just copy the drive, swap assignments, and be done). Thanks in advance for any help that can be offered, TC
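     For context, the step 8 copy is an rsync invocation roughly along these lines (the disk numbers here are placeholders, not my actual assignments):
       rsync -avPX /mnt/disk3/ /mnt/disk10/   # archive mode, show progress, preserve extended attributes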
  6. My server case is a Xigmatek Elysium, with four of their 4-in-3 drive bays, all but filled with a mixture of 3TB and 4TB drives; I want to start replacing drives with the 8TB Seagate Archives. Unfortunately these drives lack the screw hole in the middle (they only have the holes at each end), so they can't be securely fastened into the drive cages (which use the front and middle holes in the drives). Is there a drive cage (still at least 4-in-3) that can be fitted in this case (without modifications...) to allow these drives to be used? Or some other simple solution, short of just starting another server? I haven't been able to find anything terribly helpful searching around online... Thanks,
  7. Thanks BJP - reseating connections and changing around cables (one or the other) has fixed it.
  8. I'm running unRAID version 5.0rc11 and had no problems before, but now when I run a parity check one of the drives throws up a lot of read errors over about five minutes mid-check. No sync errors are reported, and there's nothing particularly wrong that I can see in the SMART report, but 204 errors show on the drive in the unRAID Main window. Syslog + SMART report attached. I have just bought a new drive for expansion - should I just copy all the data off the drive with the errors and bin it? I'm assuming that all the data is currently OK if there are no parity errors... Thanks, syslog_errors.txt smart_report.txt
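     For reference, an extended SMART self-test is one way to tell whether the drive itself is failing or the problem lies elsewhere (cabling, controller); /dev/sdX is a placeholder:
       smartctl -t long /dev/sdX      # start the extended self-test (takes hours on a large drive)
       smartctl -l selftest /dev/sdX  # check the result once it has finished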
  9. OK - a bit more searching and I found the answer anyway: it's a 'harmless' error which will get taken out as and when time allows... assuming that unraidd being 'Tainted' in my syslog and 'untainted' in the linked thread doesn't mean anything... http://lime-technology.com/forum/index.php?topic=25425.msg221266#msg221266
     --------------------------------------------------------------------------------------------
     I'm towards the end of a (regular) parity check (running 5.0rc11), and I've noticed that the syslog contains a series of errors, apparently related to a CPU stall (syslog attached below). They happened once after a couple of hours, then a few times in quick succession about 7 hours later; no parity errors reported so far (86% done), no SMART errors showing, and the check is running at a decent speed.
     Looking back, the previous thirty or so days of the log are nothing but spin-ups and spin-downs, and the server's been working fine; however, I noticed that the previous parity check was in fact throwing up the same errors. I'd missed it at the time, but it doesn't seem to have had any bad effect (hopefully). I suspect it may have something to do with the expansion card I installed around then, an AOC-SAS2LP-MV8, which so far has just one drive running from it, as I didn't see these errors in parity checks before. Can anyone shed any light on what's going on? Thanks, syslog.txt
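     As an aside, a quick way to see which drives actually sit behind the SAS2LP (versus the onboard ports) is to list devices by PCI path; this only reads information, nothing is changed:
       ls -l /dev/disk/by-path/   # maps each sdX device to the controller/port it hangs off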
  10. Hi; one of the drives I've put in my array is a WD Green, a WD30EZRX, and it seems to have the problem with the rapidly increasing LCC (load cycle count). I've looked at the various ways of fixing this, and the simplest would appear to be with hdparm: from v9.39 it has a -J option to change the WDIDLE setting directly. Unfortunately the installed version is very slightly earlier, 9.37. (I've tried disabling APM altogether with hdparm -B, but WD seem to have disabled this option in their firmware.)
      Is upgrading hdparm possible/advisable, and if so could someone either explain or point me to a guide on how to go about it? SourceForge has assorted .tar files for later versions, so I'm assuming the contents of one of these need to move somewhere on the flash, possibly be compiled, then perhaps be installed using something like upgradepkg? Or is there some more automated command for doing this? (Google suggests slapt-get, but that doesn't seem to be included in unRAID.) As you may be able to tell, I don't really have any experience with Linux, so apologies if I'm not making any sense... (The only other options would appear to be idle3ctl, a native version of WDIDLE3, which seems to come with installation instructions I could puzzle through, or, as a last resort, the DOS version of WDIDLE3.) Thanks,
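      For reference, the commands involved look roughly like this (/dev/sdX is a placeholder; idle3ctl comes from the separate idle3-tools package, which isn't part of stock unRAID):
        hdparm -J /dev/sdX    # read the current idle3/WDIDLE timer (needs hdparm 9.39 or later)
        idle3ctl -g /dev/sdX  # read the same timer with idle3-tools
        idle3ctl -d /dev/sdX  # disable the timer entirely; takes effect after a power cycle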
  11. Well, I definitely don't know what I'm doing, but setting the parameter for that drive as per the link above solves the problem, so all good :-). Thanks for the help...
  12. Perfect, thank you (now I just need to find out the same thing, where to set the hpa_ok attribute :-))
  13. Hi, I've just put together a new server, precleared 3 drives (2 x 3TB, 1 x 4TB), and assigned the drives with the 4TB as parity. However, whilst doing the initial parity rebuild, myMain was reporting a potential HPA due to detecting a 'non-standard drive size' (whilst the rebuild was ongoing it was also reporting an invalid size, but that looks to have been something to do with the rebuild process, as it's gone now). The parity drive (plus the two others) shows as green in the unRAID Main view.
      The motherboard is the ASRock 880GM-LE-FX, which as far as I'm aware doesn't have this HPA issue; the drive is a 4TB 7200rpm DeskStar. There's nothing in the syslog about an HPA, and I've run the 'hdparm -N /dev/[hs]d[a-z]' command, which reports no HPA (7814037168/7814037168 max sectors), so is this just myMain not properly recognising a 4TB drive? The OS is the current rc11. (All three drives are showing a smaller number of bytes in the unRAID Main view than they were reporting to preclear, but I assume this is a consequence of the formatting?) Thanks,
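      For completeness, this is the sort of one-liner that checks every drive for an HPA in one pass (it only reads the current/native max sector counts, nothing is changed):
        for d in /dev/[hs]d[a-z]; do echo "== $d"; hdparm -N "$d"; done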
  14. ty - that was exactly who I was looking at for the Norco :-) - but for the Xigmatek they only have the version with the window, and I figured I'd save £20 or so and get the one without from Overclockers :-). Although I suppose if I later repurposed the case for a quad-SLI water-cooled monstrosity I'd wish I had the window after all....
  15. Thanks for the feedback - it will be a while, I think, before I go much beyond 6-8 drives (partially for financial reasons, partially because I only have so much media to store right now), so when it comes time to add more I can always decide to shift to hot-swap bays for the extras. The case certainly is a beast, but it will fit down the side of a desk in my study for now, and in a cupboard later, so all good there. Also, I've never actually built a complete PC before (I've swapped/installed components but never the whole thing), so plenty of room to work in sounds like a good idea to me :-). The specs looked fairly similar on the PSUs, so I'm guessing the Seasonic is just a better brand? (They're actually about the same price, and in fact if I buy this week I can get the Seasonic £10 cheaper :-))