o0X0o

Members • 104 posts

Everything posted by o0X0o

  1. Thanks. I was cautious about upgrading as I didn't want to introduce more changes, but I've done it now (to 6.8.0) and the result is the same. Looks like I'm off to pick up a new drive so I can have space to move data and convert the file system...
  2. When I run it on the cache drive it seems to work. But when I run it on disk 2 I get the same error as when I did it through the console ("reiserfsck --check /dev/md2"):
  3. Just for laughs I tried again, and to my surprise a file moved successfully... so I tried a second time and got some new errors:
  4. I recently replaced a drive and despite some issues everything seemed like it was OK, until I noticed that the cache drive filled up because the mover was reporting read-only errors. Here is an excerpt from the syslog of the last couple of manual triggers of the mover: I put unraid into maintenance mode and tried running "reiserfsck --check /dev/md2" but get this error: bread: Cannot read the block (943196360): (Invalid argument). Failed to open the device '/dev/md2': Unknown code er3k 125 Same message when using --fix. I have run a full SMART test on all drives and they come back fine. I have also enabled disk shares and can see data on the disk in question, so I'm not sure why it would be considered read-only. There was a 4kb file on the disk share for one of the files the mover could not move, which I deleted from the cache and then the disk, but it kept reappearing on the disk from somewhere. About an hour after I last tried, it's suddenly gone... What is most strange, though, is that the dashboard reports the drive as being bigger than it actually is (3.9Tb instead of 3Tb): If you see my link above, Unraid previously identified that drive's capacity correctly. Not sure if I have a hardware or a software issue... or gremlins?
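     (A hedged aside on the size discrepancy above: one way to cross-check what size the kernel actually reports for the device, assuming the standard util-linux tools that ship with unraid; the device names are illustrative.)
        blockdev --getsize64 /dev/md2   # size in bytes of the md device as the kernel sees it
        blockdev --getsize64 /dev/sdb   # compare against the underlying physical disk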
  5. Thank you! The array is back up, I have access to my data, and it seems that disk 2 is rebuilding.
  6. ^bump... I really need to get something off the array (if it still works...)
  7. Any thoughts or feedback on this one? I think I might have had a power supply problem as well as at least one dying drive. As above, I have put the original Disk 3 back and am thinking of using the new drive that I purchased (which I had started using to replace Disk 3) to replace Disk 2 instead. Just seems odd that I'm getting the 'too many wrong disks' warning and can't start the array, and I am unsure whether to initiate 'new config'... Thanks in advance.
  8. Thanks, Squid. Really not sure what's going on... some ageing gear in there... I noticed a lot of beeping during startup (which would eventually stop - maybe because drives would spin down), but it didn't seem to match the mobo error codes I found online. So I have been mucking around endlessly with cables, splitters and the power supply, and have seen everything including 0 drives spin up. Right now I think I am closest to a working system, but the one offline drive is not the one I thought... Where I had previously been trying to replace Disk 3, I now cannot get Disk 2 going... however other drives connected to the same power/data cables are fine, so I think that's the worst drive. Disk 3 is what I had previously thought I needed to replace, and the rebuild process ran for a bit but never completed. I put the original Disk 3 back and assigned the new drive to Disk 2, but now I get "Too many wrong and/or missing disks!". Can I just run 'new config' or do I risk data loss?
  9. Hi all, Summer is upon us down under, and I've recently started receiving temperature alerts for some of my drives during the day. Today one drive was appearing as offline, so I threw in a new one to begin the replacement process, but the rebuild process is reporting heaps of errors: I haven't really shown my unraid server much love in a while, and I'm not exactly up to speed anymore. Is this something I should be concerned about? Appreciate any feedback on what's going on, and if I need to be taking any particular action. Thanks. tower-diagnostics-20191116-1306.zip
  10. Thank you, I'll do that ASAP. I also want to remove the PATA drive (I don't want to be caught in a similar position in the future should it fail). I will then pull one of the 1Tb SATA drives and replace it with the new 3Tb drive. I did a search and found 2 very different approaches: 1) This one discusses removing from shares, renaming folders, and manually moving data: http://lime-technology.com/wiki/index.php/Shrink_array 2) This one moves the data to the cache drive, and creates a new config with the (old data) drive removed from the array: http://lime-technology.com/wiki/index.php/FAQ_remove_drive Both also use MC instead of Windows Explorer to move the data - is this just for performance (so that the data transfer is local to the unraid server, and not across the network)? The second option seems the simplest. Is there a preferred (or even alternative) method?
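     (On the MC question above: yes, running mc at the server console keeps the copy local, whereas Windows Explorer pulls every file across the network to the client and pushes it back again. As a sketch of one common alternative, assuming rsync is present on the server; the paths are illustrative.)
        rsync -av /mnt/disk5/ /mnt/disk6/                        # local disk-to-disk copy, preserving attributes
        rsync -av --remove-source-files /mnt/disk5/ /mnt/disk6/  # "move" variant: deletes each source file after it is copied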
  11. Thank you - I have swapped the drives back, initiated 'new config' and put the drives back in the same order. Parity rebuild is running, and my data seems to be back. While I am here, is it worth upgrading unraid?
  12. And now the web interface (and unmenu) is not responding. Should I follow this guide and remove the new drive? http://lime-technology.com/wiki/index.php/UnRAID_Manual_5#Remove_one_or_more_data_disks Then re-add the old drive and create a new config?
  13. I had 9 data disks - one of which (the smallest) was a PATA drive. My system has been stable for a looong time, but I recently ran out of space so procured a 3Tb SATA as a replacement. Before I started the process there were no errors and parity was good. The only free slots were on a PCI-x controller. The first port I connected it to made boot take ages and produced a number of IO errors on the screen. I shut it down and moved it to the other port and all looked good, so I proceeded with the standard upgrade process. The web interface and shares were unresponsive for a period, however they came back and the process seemed to progress well. A refresh this morning shows it completed, however 2x drives (one of which is on the same controller) are now reporting massive error counts. Shares are up, however some won't open and those that do have almost no data in them. UPDATE: At the console I see a screen full of REISERFS errors for "(device md1)". I'm guessing that the problem is the PCI-x card (possibly in combination with a mobo IRQ conflict?) and that the best course may be to roll back. I am nervous because parity seems to have been written to - is it as simple as pulling the new drive and replacing the old? I am still using 5.0 RC12a. Screenshot attached. Syslog is at 78Mb and still generating so I haven't attached it yet. Will do so when it completes if anyone thinks it might be useful.
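     (A minimal sketch for capturing those REISERFS errors instead of reading them off the console, assuming standard Slackware tooling; the log path and output file are illustrative.)
        dmesg | grep -i reiserfs                                    # kernel messages for the filesystem errors
        grep -i reiserfs /var/log/syslog > /boot/reiserfs-errs.txt  # save a copy to the flash drive for attaching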
  14. Came across another unraid user with problems: http://lime-technology.com/forum/index.php?topic=24646.0 And I just read some comments elsewhere that users have needed to disable some on-board Marvell-based SATA controllers to get this (PCI-x Marvell-based SATA controller) to work... which typically means losing 2x SATA ports. I suspect this is the case for the above link (an old post I didn't want to dig up by replying to).
  15. Mine says V1.0 and uses a Marvell 88SE9230. It is also screen-printed with both "Rocket 640L" and "RocketRAID 640L" (see pic), which is interesting... the box just says Rocket. The difference seems to be support for RAID 0/1 vs 0/1/5/10/JBOD, and I have seen suggestions elsewhere that the RocketRAID 640L actually uses an 88SE9235. If true, then I do indeed have a Rocket 640L. The Highpoint website suggests that there is now a "2nd Gen" version of the card (however it is only ever referred to as "RocketRAID"), which is again reported to use a Marvell 88SE9235. I note in some of the links in your sig that you are/were using an RR620 (not a 640) with a patched kernel and script to enable support for the Marvell 88SE9128, however my 640L uses a Marvell 88SE9230. Would be good to know what card, version and chip you have, as well as how many drives/which ports you are using!
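     (One way to settle which chip a given card actually carries, assuming pciutils is available; the grep pattern and IDs shown are illustrative.)
        lspci -nn | grep -i marvell   # lists the controller with its PCI vendor:device IDs, e.g. [1b4b:9230]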
  16. I think it's safe to make it official, and I have updated the wiki to hopefully avoid anyone else getting bitten: http://lime-technology.com/wiki/index.php/Hardware_Compatibility#Hardware_Known_to_NOT_Work
  17. FFS... I hate being a Linux noob - thank you. I ran "umount /mnt/disk5" followed by "reiserfsck --check /dev/md5" and it's finally running!
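     (For anyone hitting the same wall, a recap of the sequence that worked above, with the gotcha spelled out; the disk number is illustrative.)
        umount /mnt/disk5             # the command is umount, not unmount
        reiserfsck --check /dev/md5   # read-only check; using the md device should keep parity intact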
  18. A parity check has started automatically on the last few reboots, and even if I stop the check and stop/start the array I still get "mounted with write permissions, cannot check". I presume starting in maintenance mode won't help. This particular unraid server runs 15a, so I think it has the safe-mode boot option if that would help (and allow me to use reiserfsck as per the link in your sig)? I have googled the error ("mounted with write permissions, cannot check it"), and the only links are to unraid forum pages where others have experienced it, but have not stated what resolved it (or seem to have rebuilt the drive). It is suggested a couple of times to unmount the disk (using "unmount /mnt/disk5"), but this also doesn't work ("-bash: unmount: command not found"). Doesn't the drive need to be mounted to be able to check it?
  19. There are two disks with issues: 1) Disk 4 - red-balled, was rebuilt but then appeared as unformatted. Used reiserfsck to rebuild the tree but not all files were restored and there was no lost+found. You suggested I run "reiserfsck --scan-whole-partition --rebuild-tree /dev/md4" but before I did, Disk 5 red-balled as well so I dealt with that first. Once that was back, lost+found appeared (IT fairies?) but I didn't have permissions... once I gained access I found it to be empty. A parity check then decided to run and ever since, all attempts to use reiserfsck result in "Partition /dev/mdX is mounted with write permissions, cannot check it". 2) Disk 5 - also red-balled but this time supposedly rebuilt without issue. Noticed reiserfs errors in the syslog so ran "--check" on md5 and it suggested I run "--rebuild-tree", which ran for hours even though that disk had no files visible via the disk share. Unsure if it was parity info that was being processed, or lost files... after a significant amount of time it appeared to be stuck on "unknown item type found", a bunch of gibberish, then "<15>] - deleted". vpf# then incremented to end with "wrong offset is deleted". A parity check decided to run and now all attempts to use reiserfsck result in "Partition /dev/mdX is mounted with write permissions, cannot check it". The console reports that this drive has used 1Tb, however I can only see 3Gb of data on it (that I could not see before). There is also now a lost+found folder on the disk, but again I don't have permissions to it and properties suggest it is 0kb. A parity check has started automatically on the last few reboots, and even if I stop the check and stop/start the array I still get "mounted with write permissions, cannot check". I presume starting in maintenance mode won't help. This particular unraid server runs 15a, so I think it has the safe-mode boot option if that would help? Not really sure where to go from here...
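     (A small sketch for confirming why reiserfsck keeps refusing: check whether the md device really is mounted read-write before invoking it. Assumes nothing beyond the standard mount tools; the disk number is illustrative.)
        grep md5 /proc/mounts      # shows device, mount point and rw/ro flags if mounted
        mount | grep /mnt/disk5    # the same information from the mount table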
  20. unmenu screenshot attached. Cannot seem to run reiserfsck at the moment due to the parity check running.
  21. syslog attached - thanks again. syslog.txt
  22. FYI: 14hrs and 18 passes of memtest with no errors. SMART report attached. P.S. Upon reboot it seems that a parity check has been initiated. Should I kill this in case it leads to data loss? md5.txt
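     (A sketch of pulling the same SMART data from the command line, assuming smartmontools is installed; the device name is illustrative.)
        smartctl -a /dev/sdb        # full report: health status, attributes, error and self-test logs
        smartctl -t long /dev/sdb   # start an extended offline self-test in the background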
  23. Have now completed 10 passes with no errors.
  24. Returned to the console. The last line in the syslog is: reiserfsck[6598]: segfault at 30203ca ip 0804a998 sp bfd4ee90 error 4 in reiserfsck[8048000+4a000]
  25. Thanks. I ran "--check" on md5 and it suggested I run "--rebuild-tree", so I kicked that off. It has been running for hours now even though that disk had no files visible via the disk share. Unsure if it's parity info that's being processed, or lost files... The count in the telnet session continues to tick over, however tower/main reports no space used on the drive and unmenu reports "-91579.00K" - weird in a few ways... but why would it take so long to remap a (supposedly) empty drive... The same drive passes a SMART test. EDIT: it has been stuck for hours on "unknown item type found", a bunch of gibberish, then "<15>] - deleted". EDIT2: vpf# has incremented in the last few minutes, and now ends with "wrong offset is deleted", so I will let it run overnight before I update again.
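     (A hedged recap of the escalation path used above, since --rebuild-tree rewrites the whole tree and is risky to interrupt; the device name is illustrative, and imaging the disk first is prudent if spare space allows.)
        reiserfsck --check /dev/md5          # read-only; only says whether --rebuild-tree is needed
        reiserfsck --rebuild-tree /dev/md5   # rewrites the filesystem tree; do not interrupt once started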